
    Convergence Analysis of Iterative Methods for Nonsmooth Convex Optimization over Fixed Point Sets of Quasi-Nonexpansive Mappings

    This paper considers a networked system with a finite number of users and supposes that each user tries to minimize its own private objective function over its own private constraint set. It is assumed that each user's constraint set can be expressed as the fixed point set of a certain quasi-nonexpansive mapping. This enables us to consider the case in which the projection onto the constraint set cannot be computed efficiently. This paper proposes two methods for solving the problem of minimizing the sum of the users' nondifferentiable, convex objective functions over the intersection of the fixed point sets of their quasi-nonexpansive mappings in a real Hilbert space. One method is a parallel subgradient method that can be implemented under the assumption that each user can communicate with other users. The other is an incremental subgradient method that can be implemented under the assumption that each user can communicate with its neighbors. Investigation of the two methods' convergence properties for a constant step size reveals that, with a small constant step size, they approximate a solution to the problem. Consideration of the case in which the step-size sequence is diminishing demonstrates that the sequence generated by each of the two methods strongly converges to the solution to the problem under certain assumptions. Convergence rate analysis of the two methods under certain situations is provided to illustrate the two methods' efficiency. This paper also discusses nonsmooth convex optimization over sublevel sets of convex functions and provides numerical comparisons that demonstrate the effectiveness of the proposed methods.
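
    To make the incremental method's structure concrete, the following Python sketch (an assumption-laden illustration, not the paper's exact iteration) cycles through the users, takes a subgradient step for each user's objective, and then applies that user's quasi-nonexpansive mapping; the half-space projections, l1 objectives, and diminishing step size are placeholder choices.

        import numpy as np

        def halfspace_projection(a, b):
            """Projection onto {x : <a, x> <= b}, a simple (quasi-)nonexpansive mapping."""
            def T(x):
                v = a @ x - b
                return x if v <= 0 else x - (v / (a @ a)) * a
            return T

        def incremental_subgradient(x0, subgrads, mappings, step, n_iter=100):
            """Cyclic pass: subgradient step for user i, then user i's fixed point mapping."""
            x = np.asarray(x0, dtype=float)
            for k in range(n_iter):
                lam = step(k)
                for g_i, T_i in zip(subgrads, mappings):
                    x = T_i(x - lam * g_i(x))
            return x

        # Example: two users with l1 objectives |x - c_i|_1 over a shared half-space.
        c = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
        subgrads = [lambda x, ci=ci: np.sign(x - ci) for ci in c]
        mappings = [halfspace_projection(np.array([1.0, 1.0]), 1.0)] * 2
        print(incremental_subgradient(np.zeros(2), subgrads, mappings,
                                      step=lambda k: 1.0 / (k + 1)))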

    Almost Sure Convergence of Random Projected Proximal and Subgradient Algorithms for Distributed Nonsmooth Convex Optimization

    Two distributed algorithms are described that enable all users connected over a network to cooperatively solve the problem of minimizing the sum of all users' objective functions over the intersection of all users' constraint sets, where each user has its own private nonsmooth convex objective function and closed convex constraint set, which is the intersection of a number of simple, closed convex sets. One algorithm enables each user to adjust its estimate by using the proximity operator of its objective function and the metric projection onto one set randomly selected from the simple, closed convex sets. The other is a distributed random projection algorithm that determines each user's estimate by using a subgradient of its objective function instead of the proximity operator. Investigation of the two algorithms' convergence properties for a diminishing step-size rule revealed that, under certain assumptions, the sequences of all users generated by each of the two algorithms converge almost surely to the same solution. Moreover, convergence rate analysis of the two algorithms is provided, and desirable choices of the step-size sequences that give the two algorithms fast convergence are discussed. Numerical comparisons for concrete nonsmooth convex optimization problems support the convergence analysis and demonstrate the effectiveness of the two algorithms.
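
    The core step of the first algorithm can be pictured as a proximal step followed by a projection onto one randomly drawn simple set. The sketch below is a minimal Python illustration under assumed data (an l1 objective and Euclidean balls as the simple sets); it is not the paper's exact scheme.

        import numpy as np

        rng = np.random.default_rng(0)

        def prox_l1(x, lam):
            """Proximity operator of lam * ||.||_1 (soft thresholding)."""
            return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

        def project_ball(x, center, radius):
            """Metric projection onto a closed Euclidean ball (a simple convex set)."""
            d = x - center
            n = np.linalg.norm(d)
            return x if n <= radius else center + radius * d / n

        def random_projected_proximal(x0, balls, n_iter=500):
            x = np.asarray(x0, dtype=float)
            for k in range(n_iter):
                lam = 1.0 / (k + 1)                     # diminishing step size
                y = prox_l1(x, lam)                     # proximal step for the objective
                c, r = balls[rng.integers(len(balls))]  # one randomly selected simple set
                x = project_ball(y, c, r)               # random projection step
            return x

        balls = [(np.array([0.5, 0.0]), 1.0), (np.array([0.0, 0.5]), 1.0)]
        print(random_projected_proximal(np.array([2.0, -2.0]), balls))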

    Line Search Fixed Point Algorithms Based on Nonlinear Conjugate Gradient Directions: Application to Constrained Smooth Convex Optimization

    This paper considers the fixed point problem for a nonexpansive mapping on a real Hilbert space and proposes novel line search fixed point algorithms to accelerate the search. The termination conditions for the line search are based on the well-known Wolfe conditions that are used to ensure the convergence and stability of unconstrained optimization algorithms. The directions to search for fixed points are generated by using the ideas of the steepest descent direction and conventional nonlinear conjugate gradient directions for unconstrained optimization. We perform convergence as well as convergence rate analyses on the algorithms for solving the fixed point problem under certain assumptions. The main contribution of this paper is to make a concrete response to an issue of constrained smooth convex optimization; that is, whether or not we can devise nonlinear conjugate gradient algorithms to solve constrained smooth convex optimization problems. We show that the proposed fixed point algorithms include ones with nonlinear conjugate gradient directions which can solve constrained smooth convex optimization problems. To illustrate the practicality of the algorithms, we apply them to concrete constrained smooth convex optimization problems, such as constrained quadratic programming problems and generalized convex feasibility problems, and numerically compare them with previous algorithms based on the Krasnosel'skiĭ-Mann fixed point algorithm. The results show that the proposed algorithms dramatically reduce the running time and iterations needed to find optimal solutions to the concrete optimization problems compared with the previous algorithms.
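
    As a rough illustration of searching along conjugate-gradient-like directions for a fixed point, the Python sketch below treats the residual r(x) = x - T(x) as a gradient, uses a Fletcher-Reeves-type beta, and replaces the Wolfe-type line search with simple backtracking on ||r(x)||^2; the mapping T and these simplifications are assumptions, not the paper's algorithm.

        import numpy as np

        def cg_fixed_point(T, x0, n_iter=200, tol=1e-10):
            x = np.asarray(x0, dtype=float)
            r = x - T(x)                      # residual plays the role of a gradient
            d = -r
            for _ in range(n_iter):
                if np.dot(r, r) < tol:
                    break
                alpha = 1.0                   # backtracking on the merit ||x - T(x)||^2
                while (np.sum((x + alpha * d - T(x + alpha * d)) ** 2) > np.dot(r, r)
                       and alpha > 1e-8):
                    alpha *= 0.5
                x_new = x + alpha * d
                r_new = x_new - T(x_new)
                beta = np.dot(r_new, r_new) / np.dot(r, r)   # Fletcher-Reeves-type
                d = -r_new + beta * d
                x, r = x_new, r_new
            return x

        # Example: T = projection onto the unit ball composed with a gradient step for
        # f(x) = ||x - b||^2 / 2, so Fix(T) is the minimizer of f over the ball.
        b = np.array([2.0, 1.0])
        proj = lambda y: y if np.linalg.norm(y) <= 1 else y / np.linalg.norm(y)
        T = lambda x: proj(x - 0.5 * (x - b))
        print(cg_fixed_point(T, np.zeros(2)))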

    Proximal Point Algorithms for Nonsmooth Convex Optimization with Fixed Point Constraints

    The problem of minimizing the sum of nonsmooth, convex objective functions defined on a real Hilbert space over the intersection of fixed point sets of nonexpansive mappings, onto which the projections cannot be efficiently computed, is considered. The use of proximal point algorithms that use the proximity operators of the objective functions and incremental optimization techniques is proposed for solving the problem. With the focus on fixed point approximation techniques, two algorithms are devised for solving the problem. One blends an incremental subgradient method, which is a useful algorithm for nonsmooth convex optimization, with a Halpern-type fixed point iteration algorithm. The other is based on an incremental subgradient method and the Krasnosel'skiĭ-Mann fixed point algorithm. It is shown that any weak sequential cluster point of the sequence generated by the Halpern-type algorithm belongs to the solution set of the problem and that there exists a weak sequential cluster point of the sequence generated by the Krasnosel'skiĭ-Mann-type algorithm, which also belongs to the solution set. Numerical comparisons of the two proposed algorithms with existing subgradient methods for concrete nonsmooth convex optimization show that the proposed algorithms achieve faster convergence.
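
    To give a feel for how proximity operators and a Halpern-type anchor step can be interleaved, the Python sketch below runs an incremental pass of proximity operators and then the anchored step x_{k+1} = a_k x_0 + (1 - a_k) T(y_k); the l1 proximity operators, box projection T, and parameter sequences are illustrative assumptions rather than the paper's exact algorithms.

        import numpy as np

        def prox_shifted_l1(c):
            """Proximity operator of f(x) = ||x - c||_1 with parameter lam."""
            return lambda x, lam: c + np.sign(x - c) * np.maximum(np.abs(x - c) - lam, 0.0)

        def halpern_incremental_prox(proxes, T, x0, n_iter=300):
            anchor = np.asarray(x0, dtype=float)
            x = anchor.copy()
            for k in range(1, n_iter + 1):
                lam, a = 1.0 / k, 1.0 / (k + 1)    # diminishing prox and anchor weights
                y = x
                for prox in proxes:                # incremental pass over the users
                    y = prox(y, lam)
                x = a * anchor + (1.0 - a) * T(y)  # Halpern-type fixed point step
            return x

        # Example: two l1 terms over the fixed point set of a box projection.
        T = lambda x: np.clip(x, -1.0, 1.0)
        proxes = [prox_shifted_l1(np.array([2.0, 0.0])), prox_shifted_l1(np.array([0.0, 2.0]))]
        print(halpern_incremental_prox(proxes, T, np.zeros(2)))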

    Two Stochastic Optimization Algorithms for Convex Optimization with Fixed Point Constraints

    Two optimization algorithms are proposed for solving a stochastic programming problem for which the objective function is given in the form of the expectation of convex functions and the constraint set is defined by the intersection of fixed point sets of nonexpansive mappings in a real Hilbert space. This setting of fixed point constraints enables consideration of the case in which the projection onto each of the constraint sets cannot be computed efficiently. Both algorithms use a convex function and a nonexpansive mapping determined by a certain probabilistic process at each iteration. One algorithm blends a stochastic gradient method with the Halpern fixed point algorithm. The other is based on a stochastic proximal point algorithm and the Halpern fixed point algorithm; it can be applied to nonsmooth convex optimization. Convergence analysis showed that, under certain assumptions, any weak sequential cluster point of the sequence generated by either algorithm almost surely belongs to the solution set of the problem. Convergence rate analysis illustrated their efficiency, and the numerical results for convex optimization over fixed point sets demonstrated their effectiveness.
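
    A minimal Python sketch of the first algorithm's flavor is given below: at each iteration one convex function and one nonexpansive mapping are sampled, a stochastic gradient step is taken, and a Halpern-type anchor step follows; the sampled quadratics, box projections, and parameter sequences are placeholder assumptions.

        import numpy as np

        rng = np.random.default_rng(1)
        centers = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]  # f_i(x) = ||x - c_i||^2 / 2
        boxes = [(-1.0, 1.0), (-0.5, 1.5)]                      # T_i = projection onto a box

        def stochastic_halpern(x0, n_iter=1000):
            anchor = np.asarray(x0, dtype=float)
            x = anchor.copy()
            for k in range(1, n_iter + 1):
                i = rng.integers(len(centers))            # random sample at iteration k
                grad = x - centers[i]                     # gradient of the sampled f_i
                lam, a = 1.0 / k, 1.0 / (k + 1)
                y = np.clip(x - lam * grad, *boxes[i])    # gradient step + sampled mapping
                x = a * anchor + (1.0 - a) * y            # Halpern fixed point step
            return x

        print(stochastic_halpern(np.zeros(2)))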

    Incremental and Parallel Machine Learning Algorithms with Automated Learning Rate Adjustments

    Existing machine learning algorithms for minimizing a convex function over a closed convex set suffer from slow convergence because their learning rates must be determined before running them. This paper proposes two machine learning algorithms incorporating a line search method that automatically and algorithmically finds appropriate learning rates at run-time. One algorithm is based on the incremental subgradient algorithm, which sequentially and cyclically uses each of the parts of the objective function; the other is based on the parallel subgradient algorithm, which uses the parts independently in parallel. These algorithms can be applied to constrained nonsmooth convex optimization problems appearing in tasks of learning support vector machines without adjusting the learning rates precisely. The proposed line search method can determine learning rates that satisfy weaker conditions than the ones used in the existing machine learning algorithms. This implies that the two algorithms are generalizations of the existing incremental and parallel subgradient algorithms for solving constrained nonsmooth convex optimization problems. We show that they generate sequences that converge to a solution of the constrained nonsmooth convex optimization problem under certain conditions. The main contribution of this paper is the provision of three kinds of experiments showing that the two algorithms can solve concrete experimental problems faster than the existing algorithms. First, we show that the proposed algorithms have performance advantages over the existing ones in solving a test problem. Second, we compare the proposed algorithms with a different algorithm, Pegasos, which is designed to train support vector machines efficiently, in terms of prediction accuracy, value of the objective function, and computational time. Finally, we use...
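
    The automated learning-rate idea can be sketched as a backtracking rule inside the incremental pass: halve the rate until the sampled component of the objective does not increase. The Python below is a simplified illustration with hinge-loss components and a box constraint as assumptions; the paper's actual line search uses different, weaker conditions.

        import numpy as np

        def incremental_subgradient_linesearch(fs, gs, project, x0, epochs=50,
                                               lr0=1.0, shrink=0.5, min_lr=1e-8):
            x = np.asarray(x0, dtype=float)
            for _ in range(epochs):
                for f, g in zip(fs, gs):          # one pass over the components
                    d = -g(x)                     # subgradient direction
                    lr = lr0
                    # backtrack until this component's value does not increase
                    while lr > min_lr and f(project(x + lr * d)) > f(x):
                        lr *= shrink
                    x = project(x + lr * d)
            return x

        # Example: hinge losses f_i(x) = max(0, 1 - y_i <a_i, x>) over a box.
        A = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -1.5]])
        y = np.array([1.0, 1.0, -1.0])
        fs = [lambda x, a=a, yi=yi: max(0.0, 1.0 - yi * (a @ x)) for a, yi in zip(A, y)]
        gs = [lambda x, a=a, yi=yi: (-yi * a if 1.0 - yi * (a @ x) > 0 else np.zeros_like(a))
              for a, yi in zip(A, y)]
        project = lambda z: np.clip(z, -10.0, 10.0)
        print(incremental_subgradient_linesearch(fs, gs, project, np.zeros(2)))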

    Fixed Point Quasiconvex Subgradient Method

    Constrained quasiconvex optimization problems appear in many fields, such as economics, engineering, and management science. In particular, fractional programming, which models ratio indicators such as the profit/cost ratio as fractional objective functions, is an important instance. Subgradient methods and their variants are useful ways of solving these problems efficiently. However, many of these applications involve complicated constraint sets onto which the metric projection is hard to compute in a realistic amount of time, which means that the existing methods cannot be applied to quasiconvex optimization over such a complicated set. Meanwhile, thanks to fixed point theory, we can construct a computable nonexpansive mapping whose fixed point set coincides with a complicated constraint set. This paper proposes an algorithm that uses a computable nonexpansive mapping for solving a constrained quasiconvex optimization problem. We provide convergence analyses for constant and diminishing step-size rules. Numerical comparisons between the proposed algorithm and an existing algorithm show that the proposed algorithm runs stably and quickly even when the running time of the existing algorithm exceeds the time limit.
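
    A minimal Python sketch of the idea follows: move along a normalized (quasi-)subgradient with a diminishing step, then apply a computable nonexpansive mapping whose fixed point set stands in for the complicated constraint set. The ratio objective, its gradient, and the composed projection mapping are illustrative assumptions.

        import numpy as np

        def ratio_grad(x, c, a, d=1.0, b=2.0):
            """Gradient of the quasiconvex ratio f(x) = (c.x + d) / (a.x + b), with a.x + b > 0."""
            num, den = c @ x + d, a @ x + b
            return (c * den - num * a) / den ** 2

        def fixed_point_quasiconvex_subgradient(grad, T, x0, n_iter=500):
            x = np.asarray(x0, dtype=float)
            for k in range(1, n_iter + 1):
                g = grad(x)
                n = np.linalg.norm(g)
                if n > 0:
                    x = x - (1.0 / k) * g / n   # diminishing step, normalized direction
                x = T(x)                        # nonexpansive map encoding the constraints
            return x

        # T: box projection composed with a half-space projection (computable and nonexpansive).
        halfspace = lambda z: z if z @ np.ones(2) <= 3 else z - ((z @ np.ones(2) - 3) / 2) * np.ones(2)
        T = lambda z: halfspace(np.clip(z, 0.0, 2.0))
        c, a = np.array([1.0, 2.0]), np.array([2.0, 1.0])
        print(fixed_point_quasiconvex_subgradient(lambda x: ratio_grad(x, c, a), T, np.ones(2)))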

    Appropriate Learning Rates of Adaptive Learning Rate Optimization Algorithms for Training Deep Neural Networks

    This paper deals with nonconvex stochastic optimization problems in deep learning and provides appropriate learning rates with which adaptive learning rate optimization algorithms, such as Adam and AMSGrad, can approximate a stationary point of the problem. In particular, constant and diminishing learning rates are provided to approximate a stationary point of the problem. Our results also guarantee that the adaptive learning rate optimization algorithms can approximate global minimizers of convex stochastic optimization problems. The adaptive learning rate optimization algorithms are examined in numerical experiments on text and image classification. The experiments show that the algorithms with constant learning rates perform better than those with diminishing learning rates.
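
    To make the constant versus diminishing learning-rate comparison tangible, the small numpy sketch below runs an Adam-style update on a toy nonconvex problem with both schedules; the problem, hyperparameters, and schedules are illustrative assumptions, not the paper's experimental settings.

        import numpy as np

        def adam(grad, x0, n_iter=2000, alpha=lambda k: 1e-2,
                 beta1=0.9, beta2=0.999, eps=1e-8):
            x = np.asarray(x0, dtype=float)
            m, v = np.zeros_like(x), np.zeros_like(x)
            for k in range(1, n_iter + 1):
                g = grad(x)
                m = beta1 * m + (1 - beta1) * g          # first moment estimate
                v = beta2 * v + (1 - beta2) * g * g      # second moment estimate
                m_hat = m / (1 - beta1 ** k)             # bias corrections
                v_hat = v / (1 - beta2 ** k)
                x = x - alpha(k) * m_hat / (np.sqrt(v_hat) + eps)
            return x

        # Toy nonconvex objective f(x) = sum(x^2 + sin(3x)) and its gradient.
        grad = lambda x: 2 * x + 3 * np.cos(3 * x)
        x_const = adam(grad, np.array([2.0, -2.0]), alpha=lambda k: 1e-2)               # constant
        x_dimin = adam(grad, np.array([2.0, -2.0]), alpha=lambda k: 1e-2 / np.sqrt(k))  # diminishing
        print(x_const, x_dimin)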

    Sufficient Descent Riemannian Conjugate Gradient Method

    This paper considers sufficient descent Riemannian conjugate gradient methods with line search algorithms. We propose two kinds of sufficient descent nonlinear conjugate gradient methods and prove that these methods satisfy the sufficient descent condition even on Riemannian manifolds. One is a hybrid method combining the Fletcher-Reeves-type method with the Polak-Ribière-Polyak-type method, and the other is the Hager-Zhang-type method, both of which are generalizations of those used in Euclidean space. Also, we generalize two kinds of line search algorithms that are widely used in Euclidean space. In addition, we numerically compare our generalized methods by solving several Riemannian optimization problems. The results show that the performance of the proposed hybrid method greatly depends on the type of line search used, whereas the Hager-Zhang-type method has the fast convergence property regardless of the type of line search used.
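
    As a rough feel for the hybrid method, the Python sketch below runs a Fletcher-Reeves / Polak-Ribière-Polyak hybrid conjugate gradient iteration on the unit sphere, with normalization as the retraction and tangent-space projection as the vector transport; the Rayleigh-quotient objective, the fixed step size (no Wolfe-type line search), and the beta rule are assumptions rather than the paper's exact algorithm.

        import numpy as np

        # Minimizing x^T A x on the unit sphere finds an eigenvector of the smallest eigenvalue.
        A = np.diag([1.0, 2.0, 5.0])

        def rgrad(x):
            g = 2 * A @ x
            return g - (x @ g) * x              # project onto the tangent space at x

        def retract(x, v):
            y = x + v
            return y / np.linalg.norm(y)        # retraction: normalize back onto the sphere

        def transport(x_new, v):
            return v - (x_new @ v) * x_new      # vector transport by tangent projection

        def hybrid_rcg(x0, step=0.1, n_iter=200):
            x = x0 / np.linalg.norm(x0)
            g = rgrad(x)
            d = -g
            for _ in range(n_iter):
                x_new = retract(x, step * d)
                g_new = rgrad(x_new)
                g_old, d_old = transport(x_new, g), transport(x_new, d)
                beta_fr = (g_new @ g_new) / (g @ g)
                beta_prp = (g_new @ (g_new - g_old)) / (g @ g)
                beta = max(0.0, min(beta_fr, beta_prp))   # hybrid FR/PRP choice
                d = -g_new + beta * d_old
                x, g = x_new, g_new
            return x

        print(hybrid_rcg(np.array([1.0, 1.0, 1.0])))      # approximately +/- e_1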

    Riemannian Stochastic Fixed Point Optimization Algorithm

    This paper considers a stochastic optimization problem over the fixed point sets of quasi-nonexpansive mappings on Riemannian manifolds. The problem enables us to consider Riemannian hierarchical optimization problems over complicated sets, such as the intersection of many closed convex sets, the set of all minimizers of a nonsmooth convex function, and the intersection of sublevel sets of nonsmooth convex functions. We focus on adaptive learning rate optimization algorithms, which adapt step-sizes (referred to as learning rates in the machine learning field) to find optimal solutions quickly. We then propose a Riemannian stochastic fixed point optimization algorithm that combines fixed point approximation methods on Riemannian manifolds with the adaptive learning rate optimization algorithms. We also give convergence analyses of the proposed algorithm for nonsmooth convex and smooth nonconvex optimization. The analysis results indicate that, with small constant step-sizes, the proposed algorithm approximates a solution to the problem. Consideration of the case in which step-size sequences are diminishing demonstrates that the proposed algorithm solves the problem with a guaranteed convergence rate. This paper also provides numerical comparisons that demonstrate the effectiveness of the proposed algorithm with formulas based on adaptive learning rate optimization algorithms such as Adam and AMSGrad.
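
    In the Euclidean special case, the combination can be sketched as an Adam-like step followed by an averaged application of a quasi-nonexpansive mapping encoding the constraint; the sampled quadratic losses, the box/half-space mapping, and all hyperparameters below are illustrative assumptions (bias correction is also omitted for brevity).

        import numpy as np

        rng = np.random.default_rng(2)
        targets = [np.array([2.0, 0.5]), np.array([0.5, 2.0])]  # losses ||x - t||^2 / 2

        def box_then_halfspace(x):
            """A computable nonexpansive mapping: box projection, then half-space projection."""
            x = np.clip(x, 0.0, 1.5)
            s = x @ np.ones(2) - 2.0
            return x if s <= 0 else x - (s / 2.0) * np.ones(2)

        def stochastic_fixed_point_adaptive(x0, n_iter=2000, alpha=1e-2,
                                            beta1=0.9, beta2=0.999, eps=1e-8):
            x = np.asarray(x0, dtype=float)
            m, v = np.zeros_like(x), np.zeros_like(x)
            for _ in range(n_iter):
                g = x - targets[rng.integers(len(targets))]   # stochastic gradient sample
                m = beta1 * m + (1 - beta1) * g
                v = beta2 * v + (1 - beta2) * g * g
                y = x - alpha * m / (np.sqrt(v) + eps)        # adaptive learning rate step
                x = 0.5 * y + 0.5 * box_then_halfspace(y)     # averaged fixed point step
            return x

        print(stochastic_fixed_point_adaptive(np.zeros(2)))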